5 research outputs found

    Automatic Repair of Real Bugs: An Experience Report on the Defects4J Dataset

    Full text link
    Defects4J is a large, peer-reviewed, structured dataset of real-world Java bugs. Each bug in Defects4J is provided with a test suite and at least one failing test case that triggers the bug. In this paper, we report on an experiment to explore the effectiveness of automatic repair on Defects4J. The result of our experiment shows that 47 bugs of the Defects4J dataset can be automatically repaired by state-of-the-art repair methods. This sets a baseline for future research on automatic repair for Java. We have manually analyzed 84 different patches to assess their real correctness. In total, 9 real Java bugs can be correctly fixed with test-suite based repair. This analysis shows that test-suite based repair suffers from under-specified bugs, for which trivial or incorrect patches still pass the test suite. With respect to practical applicability, it takes on average 14.8 minutes to find a patch. The experiment was done on a scientific grid, totaling 17.6 days of computation time. All the repair systems and experimental results are publicly available on GitHub in order to facilitate future research on automatic repair.

    Automatic Repair of Real Bugs in Java: A Large-Scale Experiment on the Defects4J Dataset

    Get PDF
    Defects4J is a large, peer-reviewed, structured dataset of real-world Java bugs. Each bug in Defects4J comes with a test suite and at least one failing test case that triggers the bug. In this paper, we report on an experiment to explore the effectiveness of automatic test-suite based repair on Defects4J. The result of our experiment shows that the considered state-of-the-art repair methods can generate patches for 47 out of 224 bugs. However, those patches are only test-suite adequate, which means that they pass the test suite and may potentially be incorrect beyond the test-suite satisfaction correctness criterion. We have manually analyzed 84 different patches to assess their real correctness. In total, 9 real Java bugs can be correctly repaired with test-suite based repair. This analysis shows that test-suite based repair suffers from under-specified bugs, for which trivial or incorrect patches still pass the test suite. With respect to practical applicability, it takes on average 14.8 minutes to find a patch. The experiment was done on a scientific grid, totaling 17.6 days of computation time. All the repair systems and experimental results are publicly available on GitHub in order to facilitate future research on automatic repair.
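
    The under-specification problem is easiest to see on a toy case. Below is a minimal, purely hypothetical Java sketch (not drawn from Defects4J): because the test suite only checks one ordinary input, a trivial patch that guts the faulty branch is test-suite adequate yet clearly incorrect.

```java
// Hypothetical illustration (not taken from Defects4J) of an
// under-specified bug: the test suite only pins down the behaviour of
// non-negative inputs, so a trivial patch is test-suite adequate.
public class UnderSpecifiedExample {

    // Intended behaviour: absolute value. A repair tool that only has to
    // satisfy the weak test below can "fix" a faulty negative branch by
    // deleting it entirely: the patch passes all tests but is incorrect.
    static int abs(int x) {
        return x; // trivial patch: wrong for every negative input
    }

    public static void main(String[] args) {
        // The entire "specification": a single check on a positive value.
        if (abs(3) != 3) {
            throw new AssertionError("abs(3) should be 3");
        }
        System.out.println("Test suite passed; the trivial patch is accepted.");
    }
}
```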

    Towards Privacy-Preserving Data Dissemination in Crowd-Sensing Middleware Platform

    Get PDF
    Crowd-sensing, also known as mobile crowdsourcing, is a growing research topic, which consists in engaging end users in the process of gathering physical measurements in the field. While the democratization of such middleware platforms paves the way for the observation of phenomena at scale, it may also raise key issues about the privacy of the end users involved in the gathering process. In this paper, we therefore report on our preliminary steps towards enabling privacy-preserving data dissemination in crowd-sensing middleware platforms by adopting a decentralized dissemination strategy to fuzz the contributions of end users. To assess such a strategy, we also report on our initiative to deploy a large-scale middleware platform to emulate and control a crowd of devices. In particular, we claim that an assessment by simulation is not acceptable for this kind of contribution and that the mobile engineering community requires a more realistic evaluation testbed.
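
    To make the idea of fuzzing contributions concrete, here is a minimal Java sketch under stated assumptions: a contribution is a (latitude, longitude, value) reading, and "fuzzing" means perturbing the coordinates with bounded random jitter before dissemination. The Reading record, the ContributionFuzzer class, and the uniform noise model are all illustrative; the paper's decentralized dissemination strategy is not reproduced here.

```java
import java.util.Random;

// Minimal, hypothetical sketch of fuzzing a crowd-sensing contribution on
// the device before it is disseminated. The record layout and the uniform
// jitter model are illustrative assumptions, not the paper's strategy.
public class ContributionFuzzer {

    // A single contribution: where it was sensed and the measured value.
    record Reading(double latitude, double longitude, double value) {}

    private final Random rng = new Random();
    private final double jitterDegrees; // maximum perturbation of each coordinate

    public ContributionFuzzer(double jitterDegrees) {
        this.jitterDegrees = jitterDegrees;
    }

    // Perturb the location so the exact position never leaves the device;
    // the sensed value itself is forwarded unchanged.
    public Reading fuzz(Reading original) {
        double dLat = (rng.nextDouble() * 2 - 1) * jitterDegrees;
        double dLon = (rng.nextDouble() * 2 - 1) * jitterDegrees;
        return new Reading(original.latitude() + dLat,
                           original.longitude() + dLon,
                           original.value());
    }

    public static void main(String[] args) {
        ContributionFuzzer fuzzer = new ContributionFuzzer(0.01); // roughly 1 km of jitter
        Reading raw = new Reading(48.8566, 2.3522, 61.5); // illustrative ambient-noise sample
        System.out.println("Disseminated reading: " + fuzzer.fuzz(raw));
    }
}
```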

    Automatic Repair of Real Bugs: An Experience Report on the Defects4J Dataset

    No full text
    Automatic software repair aims to reduce human effort for fixing bugs. Various automatic repair approaches have emerged in recent years. In this paper, we report on an experiment on automatically repairing 224 bugs of a real-world and publicly available bug dataset, Defects4J. We investigate the results of three repair methods: GenProg (repair via random search), Kali (repair via exhaustive search), and Nopol (repair via constraint-based search). We conduct our investigation along five research questions: fixability, patch correctness, ill-defined bugs, performance, and fault localizability. Our implementations of GenProg, Kali, and Nopol together fix 41 out of 224 (18%) bugs with 59 different patches. This can be viewed as a baseline for future usage of Defects4J in automatic repair research. In addition, manual analysis of a sample of 42 of the 59 generated patches shows that only 8 patches are undoubtedly correct. This is a novel piece of evidence that there is large room for improvement in the area of test-suite based repair.
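
    The three tools differ in how they search for candidate patches (random, exhaustive, constraint-based), but they share the same acceptance criterion: the first candidate that makes the whole test suite pass is kept. The sketch below shows that shared criterion only; the Patch and TestSuite interfaces are hypothetical placeholders, not the tools' real APIs.

```java
import java.util.List;
import java.util.Optional;

// Minimal sketch of the acceptance criterion that test-suite based repair
// tools share. The interfaces are illustrative placeholders, not the
// actual GenProg, Kali, or Nopol implementations.
public class TestSuiteAdequateRepair {

    interface Patch { String describe(); }

    interface TestSuite { boolean passesWith(Patch candidate); }

    // Returns the first test-suite-adequate candidate, i.e. the first one
    // that passes every test. As the manual analysis in the paper shows,
    // such a patch may still be incorrect when the test suite
    // under-specifies the intended behaviour.
    static Optional<Patch> repair(List<Patch> candidates, TestSuite suite) {
        for (Patch candidate : candidates) {
            if (suite.passesWith(candidate)) {
                return Optional.of(candidate);
            }
        }
        return Optional.empty();
    }
}
```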